Recent breakthroughs in semi-supervised semantic segmentation have been driven by contrastive learning. In prevalent pixel-wise contrastive learning solutions, the model maps pixels to deterministic representations and regularizes them in the latent space. However, inaccurate pseudo-labels arise that map the ambiguous representations of pixels to the wrong classes due to the limited cognitive ability of the model. In this paper, we define pixel-wise representations from a new perspective of probability theory and propose a Probabilistic Representation Contrastive Learning (PRCL) framework that improves representation quality by taking probability into consideration. By modeling the mapping from pixels to representations probabilistically via multivariate Gaussian distributions, we can tune the contribution of ambiguous representations to tolerate the risk of inaccurate pseudo-labels. Furthermore, we define prototypes in the form of distributions, which indicate the confidence of a class, something a point prototype cannot express. Moreover, we propose to regularize the distribution variance to enhance the reliability of representations. Owing to these benefits, high-quality feature representations can be derived in the latent space, and the performance of semantic segmentation is further improved. We conduct extensive experiments on Pascal VOC and Cityscapes to demonstrate the superiority of PRCL. The code is available at https://github.com/Haoyu-Xie/PRCL.
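The variance-aware weighting idea can be sketched roughly as follows. This is a toy numpy sketch under our own simplifications, not the authors' PRCL implementation: each pixel representation is a diagonal Gaussian, and the exp(-variance) confidence weight used to down-weight ambiguous pixels is our assumption.

```python
import numpy as np

def prob_contrastive_loss(mu, var, labels, prototypes, temp=0.1):
    """Toy probabilistic contrastive loss.

    Each pixel representation is a diagonal Gaussian (mu, var);
    a pixel's contribution to the loss is down-weighted by its
    predicted variance, so ambiguous representations (often paired
    with wrong pseudo-labels) influence training less.
    """
    # cosine similarity between Gaussian means and class prototypes
    mu_n = mu / np.linalg.norm(mu, axis=1, keepdims=True)
    proto_n = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    logits = mu_n @ proto_n.T / temp                      # (N, C)
    # standard InfoNCE-style cross-entropy against the (pseudo) label
    logits -= logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    ce = -log_prob[np.arange(len(labels)), labels]        # (N,)
    # confidence weight: sharper (low-variance) Gaussians count more
    w = np.exp(-var.mean(axis=1))
    return float((w * ce).sum() / w.sum())

rng = np.random.default_rng(0)
mu = rng.normal(size=(8, 16))            # 8 pixels, 16-dim means
var = rng.uniform(0.1, 2.0, size=(8, 16))
labels = rng.integers(0, 4, size=8)      # pseudo-labels over 4 classes
prototypes = rng.normal(size=(4, 16))
loss = prob_contrastive_loss(mu, var, labels, prototypes)
```

A distributional prototype would additionally carry its own variance; here only the pixel side is probabilistic, which is the simplest version of the idea.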
Popular approaches to semi-supervised semantic segmentation mostly adopt a unitary network model based on convolutional neural networks (CNNs) and enforce consistency of the model's predictions over small perturbations applied to the inputs or to the model. However, this learning paradigm suffers from a) the limited learning capability of the CNN-based model; b) the limited capacity to learn discriminative features from unlabeled data; and c) limited learning of both global and local information from the whole image. In this paper, we propose a novel semi-supervised learning approach, called Transformer-CNN Cohort (TCC), which consists of two students: one based on the vision Transformer (ViT) and the other based on a CNN. Our method subtly incorporates multi-level consistency regularization on both the predictions and the heterogeneous feature spaces via pseudo-labeling of the unlabeled data. First, since the inputs to the ViT student are image patches, the extracted feature maps encode crucial class-wise statistics. To this end, we propose class-aware feature consistency distillation (CFCD), which first leverages the outputs of each student as pseudo-labels and generates class-aware feature (CF) maps. It then transfers knowledge between the students via the CF maps. Second, as the ViT student has more uniform representations across all layers, we propose consistency-aware cross distillation to transfer knowledge between the class-wise pixel-level predictions. We validate the TCC framework on the Cityscapes and Pascal VOC 2012 datasets, on which it significantly outperforms existing semi-supervised methods.
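A masked-average-pooling view of class-aware feature (CF) maps might look like the sketch below. This is illustrative numpy only: the function names and the MSE transfer loss are our assumptions, not the TCC code.

```python
import numpy as np

def class_aware_features(feat, pseudo, num_classes):
    """Masked average pooling: one feature vector per predicted class.

    feat:   (C, H, W) feature map from one student
    pseudo: (H, W) hard pseudo-label from the other student
    Returns (num_classes, C); rows for absent classes stay zero.
    """
    C, H, W = feat.shape
    flat = feat.reshape(C, -1)          # (C, H*W)
    lab = pseudo.reshape(-1)            # (H*W,)
    cf = np.zeros((num_classes, C))
    for k in range(num_classes):
        m = lab == k
        if m.any():
            cf[k] = flat[:, m].mean(axis=1)
    return cf

def cfcd_loss(feat_a, feat_b, pseudo, num_classes=4):
    """MSE between the two students' class-aware feature maps."""
    cfa = class_aware_features(feat_a, pseudo, num_classes)
    cfb = class_aware_features(feat_b, pseudo, num_classes)
    return float(((cfa - cfb) ** 2).mean())

rng = np.random.default_rng(0)
feat_a = rng.normal(size=(8, 6, 6))     # ViT student features (toy)
feat_b = rng.normal(size=(8, 6, 6))     # CNN student features (toy)
pseudo = rng.integers(0, 4, size=(6, 6))
loss_ab = cfcd_loss(feat_a, feat_b, pseudo)
```

Because the pooling is over heterogeneous feature spaces, a real implementation would likely project both students into a shared embedding first; that projection is omitted here.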
Nowadays, wireless communication is rapidly reshaping entire industry sectors. In particular, mobile edge computing (MEC) is an enabling technology for the industrial Internet of Things (IIoT) that brings powerful computing/storage infrastructure closer to the mobile terminals, thereby greatly reducing response latency. To reap the benefits of proactive caching at the network edge, precise knowledge of content popularity among end devices is essential. However, the complex nature of content popularity over space and time, as well as data-privacy concerns in many IIoT scenarios, pose tough challenges to its acquisition. In this paper, we propose an unsupervised and privacy-preserving popularity prediction framework for MEC-enabled IIoT. The concepts of local and global popularity are introduced, and the time-varying popularity of each user is modeled as a model-free Markov chain. On this basis, a novel unsupervised recurrent federated learning (URFL) algorithm is proposed to predict the distributed popularity while achieving privacy preservation and unsupervised training. Simulations show that the proposed framework can improve prediction accuracy in terms of a reduced root-mean-square error by up to $60.5\%-68.7\%$. Moreover, both manual labeling and violations of user data privacy are avoided.
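The model-free Markov-chain view of time-varying popularity can be made concrete with a toy simulation; the transition matrix, number of users, and request counts below are invented for illustration and have nothing to do with the paper's URFL algorithm.

```python
import numpy as np

# Toy model: each user's currently preferred content item evolves as a
# Markov chain over F items; "local popularity" is one user's request
# distribution, "global popularity" aggregates all users.
rng = np.random.default_rng(1)
F = 4
P = np.array([[0.7, 0.1, 0.1, 0.1],    # row-stochastic transition matrix
              [0.1, 0.7, 0.1, 0.1],
              [0.1, 0.1, 0.7, 0.1],
              [0.1, 0.1, 0.1, 0.7]])

def simulate_requests(P, steps, rng):
    """Sample one user's request trajectory from the chain."""
    s, traj = 0, []
    for _ in range(steps):
        s = rng.choice(len(P), p=P[s])
        traj.append(s)
    return traj

counts = np.zeros(F)
for _ in range(50):                     # 50 users, 100 requests each
    for s in simulate_requests(P, 100, rng):
        counts[s] += 1
global_popularity = counts / counts.sum()
```

In the federated setting, each user would keep its trajectory local and only share model updates; here we aggregate raw counts purely to visualize what "global popularity" means.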
Recently, random feature attention (RFA) was proposed to approximate softmax attention in linear time and space complexity by linearizing the exponential kernel. In this paper, we first propose a novel perspective to understand the bias of such an approximation by recasting RFA as a self-normalized importance sampler. This perspective further sheds light on an \emph{unbiased} estimator of the whole softmax attention, called randomized attention (RA). RA constructs positive random features via query-specific distributions and enjoys greatly improved approximation fidelity, albeit at quadratic complexity. By combining the expressiveness of RA with the efficiency of RFA, we develop a novel linear-complexity self-attention mechanism called linear randomized attention (LARA). Extensive experiments across various domains demonstrate that RA and LARA significantly improve the performance of RFA.
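The positive-random-feature linearization underlying RFA-style attention can be sketched in a few lines of numpy. This is the standard Performer-style construction of the exponential kernel, shown to illustrate how the matmul reordering yields linear complexity; it is not the paper's RA or LARA estimator.

```python
import numpy as np

def positive_random_features(x, W):
    """phi(x) with E[phi(q) @ phi(k)] = exp(q @ k), the (unnormalized)
    softmax kernel. W holds m Gaussian random projections."""
    m = W.shape[0]
    return np.exp(x @ W.T - (x ** 2).sum(-1, keepdims=True) / 2) / np.sqrt(m)

def rfa_attention(Q, K, V, W):
    """Kernelized attention: reordering (Qp Kp^T) V as Qp (Kp^T V)
    makes cost linear in sequence length."""
    Qp, Kp = positive_random_features(Q, W), positive_random_features(K, W)
    num = Qp @ (Kp.T @ V)                    # (n, d)
    den = Qp @ Kp.sum(axis=0)                # (n,)
    return num / den[:, None]

def softmax_attention(Q, K, V):
    S = np.exp(Q @ K.T)
    return (S / S.sum(axis=1, keepdims=True)) @ V

rng = np.random.default_rng(0)
n, d, m = 6, 4, 4096
Q, K, V = (0.3 * rng.normal(size=(n, d)) for _ in range(3))
W = rng.normal(size=(m, d))
err = np.abs(rfa_attention(Q, K, V, W) - softmax_attention(Q, K, V)).max()
```

The self-normalization (dividing `num` by `den`) is exactly what makes the estimator biased despite each kernel estimate being unbiased, which is the starting point of the paper's importance-sampling view.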
Molecular dynamics (MD) simulation predicts the trajectories of atoms by solving Newton's equations of motion with a numerical integrator. Due to physical constraints, the time step of the integrator needs to be small to maintain sufficient accuracy, which limits simulation efficiency. To this end, we introduce a graph neural network (GNN) based model, MDNet, to predict the evolution of coordinates and momenta over a large time step. Moreover, MDNet can easily scale to larger systems thanks to its linear complexity with respect to system size. We demonstrate the performance of MDNet on a 4000-atom system with large time steps and show that MDNet can predict good equilibrium and transport properties, well aligned with standard MD simulations.
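A minimal message-passing step in the spirit of "predict a large-Δt update from the interaction graph" might look as follows. This is a hypothetical stand-in, not MDNet's architecture: the tanh message, the weight shapes, and the update rule are all invented; the only point is that the cost is linear in the number of edges.

```python
import numpy as np

def gnn_md_step(pos, mom, edges, w1, w2, dt=1.0):
    """One toy message-passing update predicting (pos, mom) after a
    large time step dt.

    pos, mom: (N, 3) coordinates and momenta
    edges:    list of directed (i, j) neighbor pairs
    w1, w2:   (3, 3) toy learned weights
    """
    msg = np.zeros_like(pos)
    for i, j in edges:                      # aggregate relative displacements
        msg[i] += np.tanh((pos[j] - pos[i]) @ w1)
    mom_new = mom + msg @ w2                # learned "impulse" over dt
    pos_new = pos + dt * mom_new            # single big integration step
    return pos_new, mom_new

rng = np.random.default_rng(0)
pos = rng.normal(size=(5, 3))
mom = np.zeros((5, 3))
edges = [(0, 1), (1, 0), (1, 2), (2, 1)]
w1 = 0.1 * rng.normal(size=(3, 3))
w2 = 0.1 * rng.normal(size=(3, 3))
pos2, mom2 = gnn_md_step(pos, mom, edges, w1, w2, dt=10.0)
```

Because messages depend only on relative positions, the update is translation-invariant, a property any physically sensible learned integrator would need.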
Semi-supervised learning has made significant strides in the medical domain, since it alleviates the heavy burden of collecting abundant pixel-wise annotations for semantic segmentation tasks. Existing semi-supervised methods enhance the ability to extract features from unlabeled data using the prior knowledge obtained from limited labeled data. However, due to the scarcity of labeled data, the features extracted by the model are limited in supervised learning, and the quality of predictions on unlabeled data cannot be guaranteed. Both will impede consistency training. To this end, we propose a novel uncertainty-aware scheme that makes the model learn valuable regions automatically. Specifically, we adopt Monte Carlo sampling as an estimation method to obtain an uncertainty map, which serves as a weight on the losses to force the model to focus on valuable regions in both supervised and unsupervised learning. Meanwhile, in the backward process, we jointly optimize the unsupervised and supervised losses to accelerate the convergence of the network by enhancing the gradient flow between different tasks. Quantitatively, we conduct extensive experiments on three challenging medical datasets. Experimental results show desirable improvements over state-of-the-art counterparts.
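The Monte Carlo uncertainty weighting can be sketched as below. This is a toy numpy version: input dropout stands in for MC dropout inside a real network, and the exp(-variance) loss weight is our assumption, not necessarily the paper's exact weighting.

```python
import numpy as np

def mc_uncertainty_map(logits_fn, x, T=8, rng=None):
    """Monte Carlo estimate of a per-pixel uncertainty map.

    Runs T stochastic forward passes (simulated here by dropping out
    input pixels) and takes the variance of the softmax outputs as
    uncertainty; low-uncertainty pixels receive larger loss weights.
    """
    rng = rng or np.random.default_rng(0)
    probs = []
    for _ in range(T):
        mask = rng.random(x.shape) > 0.1          # dropout on the input
        z = logits_fn(x * mask)                   # (C, H, W) class logits
        e = np.exp(z - z.max(axis=0, keepdims=True))
        probs.append(e / e.sum(axis=0, keepdims=True))
    var = np.stack(probs).var(axis=0).mean(axis=0)  # (H, W) uncertainty
    weight = np.exp(-var)                           # certain pixels count more
    return var, weight

x = np.random.default_rng(0).random((5, 5))
# toy 2-class "network": logits are (x, -x) per pixel
var, w = mc_uncertainty_map(lambda x: np.stack([x, -x]), x, T=8)
```

The weight map would multiply the per-pixel cross-entropy (supervised branch) and consistency loss (unsupervised branch) before averaging.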
Mobile edge computing (MEC) is a prominent computing paradigm that expands the application fields of wireless communication. Due to the limited capacities of user devices and MEC servers, edge caching (EC) optimization is crucial to the effective utilization of caching resources in MEC-enabled wireless networks. However, the dynamics and complexity of content popularity over space and time, as well as the privacy preservation of users, pose significant challenges to EC optimization. In this paper, a privacy-preserving distributed deep deterministic policy gradient (P2D3PG) algorithm is proposed to maximize the cache hit rates of devices in MEC networks. Specifically, we consider the fact that content popularity is dynamic, complex, and unobservable, and formulate the maximization of the devices' cache hit rates as a distributed problem under privacy-preservation constraints. In particular, we convert the distributed optimization into distributed model-free Markov decision process problems and then introduce a privacy-preserving federated learning method for popularity prediction. Subsequently, the P2D3PG algorithm is developed based on distributed reinforcement learning to solve the distributed problems. Simulation results demonstrate the superiority of the proposed approach in improving EC hit rates over baseline methods while preserving user privacy.
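The cache-hit-rate objective is easy to make concrete. The sketch below uses a toy i.i.d. request model, a deliberate simplification of the paper's dynamic, unobservable popularity; the popularity vector and cache size are invented.

```python
import numpy as np

def cache_hit_rate(requests, cached_items):
    """Fraction of requests served directly from the edge cache."""
    cached = set(cached_items)
    return sum(r in cached for r in requests) / len(requests)

# Under i.i.d. requests, caching the predicted top-k popular items
# maximizes the expected hit rate, which is why accurate popularity
# prediction matters for EC optimization.
rng = np.random.default_rng(0)
popularity = np.array([0.4, 0.3, 0.15, 0.1, 0.05])   # 5 content items
requests = rng.choice(5, size=1000, p=popularity)
hit_top2 = cache_hit_rate(requests, [0, 1])          # cache the 2 hottest
hit_tail2 = cache_hit_rate(requests, [3, 4])         # cache the 2 coldest
```

When popularity is non-stationary and hidden, the caching decision becomes a sequential decision problem, which motivates casting it as a Markov decision process solved with reinforcement learning.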
This paper reviews the first NTIRE challenge on quality enhancement of compressed video, with a focus on the proposed methods and results. In this challenge, a new Large-scale Diverse Video (LDV) dataset is employed. The challenge has three tracks. Tracks 1 and 2 aim at enhancing videos compressed by HEVC at a fixed QP, while Track 3 targets enhancing videos compressed by x265 at a fixed bit rate. Besides, Tracks 1 and 3 target improving fidelity (PSNR), while Track 2 targets improving perceptual quality. The three tracks attracted 482 registrations in total. In the test phase, 12 teams, 8 teams, and 11 teams submitted final results for Tracks 1, 2, and 3, respectively. The proposed methods and solutions gauge the state of the art of video quality enhancement. Homepage of the challenge: https://github.com/renyang-home/ntire21_venh
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot benefit, or benefit only marginally, from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, input, network regularization, sequential distillation, etc., revealing that: 1) Distilling token relations is more effective than CLS-token- and feature-based distillation; 2) Using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) Weak regularization is preferred; etc. With these findings, we achieve significant fine-tuning accuracy improvements over scratch MIM pre-training on ImageNet-1K classification, using the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, setting a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models: exploring better training methods rather than introducing inductive biases into architectures as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
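Token-relation distillation, the option the study finds most effective, can be sketched as follows. This is illustrative numpy only: the cross-entropy between row-softmaxed Q-K relation maps is a generic formulation, not necessarily TinyMIM's exact loss.

```python
import numpy as np

def token_relations(Q, K, temp=1.0):
    """Row-softmax of Q K^T: how each token relates to the others."""
    S = Q @ K.T / temp
    S -= S.max(axis=1, keepdims=True)
    E = np.exp(S)
    return E / E.sum(axis=1, keepdims=True)

def relation_distill_loss(Qs, Ks, Qt, Kt):
    """Cross-entropy between teacher and student token-relation maps,
    so the student mimics *relations* between tokens rather than
    matching raw features or the CLS token."""
    Ps = token_relations(Qs, Ks)          # student relations
    Pt = token_relations(Qt, Kt)          # teacher relations (target)
    return float(-(Pt * np.log(Ps + 1e-9)).sum(axis=1).mean())

rng = np.random.default_rng(0)
Qs, Ks, Qt, Kt = (rng.normal(size=(7, 8)) for _ in range(4))
loss = relation_distill_loss(Qs, Ks, Qt, Kt)
self_loss = relation_distill_loss(Qt, Kt, Qt, Kt)  # entropy floor
```

Per the paper's finding, `Qt, Kt` would come from an intermediate teacher layer rather than the last one when student and teacher depths mismatch.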
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
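The NAIVEATTACK idea of stamping a trigger into the raw data before distillation can be sketched as below. This is purely illustrative: the trigger shape, value, and bottom-right placement are invented, and a real attack would run this at the start of a dataset-distillation pipeline rather than on its own.

```python
import numpy as np

def stamp_trigger(images, trigger):
    """Paste a small trigger patch into the bottom-right corner of
    each image that will feed the distillation procedure.

    images:  (N, H, W) grayscale batch
    trigger: (h, w) patch
    """
    out = images.copy()                 # leave the clean data untouched
    h, w = trigger.shape
    out[:, -h:, -w:] = trigger
    return out

imgs = np.zeros((3, 8, 8))              # toy clean batch
trig = np.ones((2, 2))                  # toy 2x2 trigger
poisoned = stamp_trigger(imgs, trig)
```

DOORPING differs in that the trigger itself is updated iteratively throughout distillation instead of being fixed up front, which is why it reaches near-1.0 ASR where this naive stamping does not always.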